    Motion Segmentation and Egomotion Estimation with Event-Based Cameras

    Computer vision has long been dominated by classical CMOS frame-based imaging sensors. Yet motion is not well represented in classical cameras and vision techniques: traditional vision is frame-based and exists only 'in the moment', while motion is a continuous entity. With the introduction of neuromorphic hardware, such as event-based cameras, we are ready to move beyond frame-based vision and develop a new concept: motion-based vision. The event-based sensor provides dense temporal information about changes in the scene; it can 'see' motion at an equivalent of an almost infinite framerate, making it a perfect fit for creating dense, long-term motion trajectories and allowing for significantly more efficient, generic, and at the same time accurate motion perception. By design, an event-based sensor accommodates a large dynamic range and provides high temporal resolution and low latency: ideal properties for applications where high-quality motion estimation and tolerance of challenging lighting conditions are desirable. These properties come at a heavy price: event-based sensors produce a lot of noise, their spatial resolution is relatively low, and their data, typically referred to as an event cloud, is asynchronous and sparse. Event sensors thus offer new opportunities for the robust visual perception so needed in autonomous robotics, but the challenges associated with the sensor output call for different visual processing approaches. In this dissertation we develop methods and frameworks for motion segmentation and egomotion estimation on event-based data, starting with a simple optimization-based approach for camera motion compensation and object tracking, then developing several deep learning pipelines, while continuing to explore the connection between the shapes of event clouds and scene motion. We collect EV-IMO, the first pixelwise-annotated motion segmentation dataset for event cameras, and propose a 3D graph-based learning approach for motion segmentation in the (x, y, t) domain. Finally, we develop a set of mathematical constraints for event streams which leverage their temporal density and connect the shape of the event cloud with camera and object motion.
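
    The abstract does not spell out the optimization-based motion compensation, so the sketch below only illustrates the general idea common in the event-camera literature (often called contrast maximization): warp the event cloud along a candidate velocity and score how sharp the accumulated image becomes. The (x, y, t) event layout, the constant-velocity motion model, and the sensor resolution are assumptions, not details from the dissertation.

```python
# Minimal sketch of event-cloud motion compensation via contrast maximization.
# Assumed (not from the text): events as an (N, 3) float array of (x, y, t),
# a constant pixel velocity over the time window, and a 260x346 sensor.
import numpy as np
from scipy.optimize import minimize

def warp_events(events, v):
    """Shift each event back to the window start along velocity v = (vx, vy)."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    dt = t - t[0]
    return x - v[0] * dt, y - v[1] * dt

def neg_sharpness(v, events, shape):
    """Negative variance of the warped event image (sharper image = lower)."""
    xw, yw = warp_events(events, v)
    xi = np.clip(np.round(xw).astype(int), 0, shape[1] - 1)
    yi = np.clip(np.round(yw).astype(int), 0, shape[0] - 1)
    img = np.zeros(shape)
    np.add.at(img, (yi, xi), 1.0)          # accumulate events per pixel
    return -img.var()

def compensate(events, shape=(260, 346)):
    """Estimate the camera-induced pixel velocity that deblurs the events."""
    res = minimize(neg_sharpness, x0=np.zeros(2), args=(events, shape),
                   method="Nelder-Mead")
    return res.x                           # (vx, vy) in pixels per unit time
```

    Once the dominant motion is compensated this way, events that remain misaligned become candidates for independently moving objects, which is the intuition behind segmenting motion directly in the event cloud.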

    EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras

    We present the first event-based learning approach for motion segmentation in indoor scenes and the first event-based dataset, EV-IMO, which includes accurate pixel-wise motion masks, egomotion, and ground truth depth. Our approach is based on an efficient implementation of the SfM learning pipeline, using a low-parameter neural network architecture on event data. In addition to camera egomotion and a dense depth map, the network estimates pixel-wise segmentation of independently moving objects and computes per-object 3D translational velocities. We also train a shallow network with just 40k parameters, which is able to compute depth and egomotion. Our EV-IMO dataset features 32 minutes of indoor recording with up to 3 fast-moving objects simultaneously in the camera's field of view. The objects and the camera are tracked by a VICON motion capture system. By 3D scanning the room and the objects, accurate depth map ground truth and pixel-wise object masks are obtained, which are reliable even in poor lighting conditions and during fast motion. We then train and evaluate our learning pipeline on EV-IMO and demonstrate that our approach far surpasses its rivals and is well suited for scene-constrained robotics applications. Comment: 8 pages, 6 figures. Submitted to the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019).
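
    The abstract does not describe the network architecture, so the following is a hypothetical minimal multi-head model that only mirrors the stated output structure: a dense depth map, per-pixel masks for independently moving objects, and 6-DoF camera egomotion. Layer sizes, the event-slice input encoding, and the object count are illustrative placeholders.

```python
# Hypothetical multi-head network mirroring the EV-IMO output structure:
# dense depth, per-object motion masks, and camera egomotion. All layer
# sizes and the event-slice input encoding are illustrative placeholders.
import torch
import torch.nn as nn

class EventSfMNet(nn.Module):
    def __init__(self, in_ch=3, max_objects=3):
        super().__init__()
        # Shared encoder over events binned into `in_ch` temporal slices.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Dense head: one depth value per pixel.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )
        # Dense head: logits for background + up to `max_objects` movers.
        self.mask_head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, max_objects + 1, 4, stride=2, padding=1),
        )
        # Global head: 6-DoF egomotion (3 rotation + 3 translation).
        self.pose_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 6),
        )

    def forward(self, event_slices):
        feats = self.encoder(event_slices)
        return self.depth_head(feats), self.mask_head(feats), self.pose_head(feats)
```

    For a batch of 256x256 event slices, `EventSfMNet()(torch.zeros(1, 3, 256, 256))` returns per-pixel depth, mask logits, and a 6-vector pose, matching the three kinds of output the abstract describes.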

    EVIMO2: An Event Camera Dataset for Motion Segmentation, Optical Flow, Structure from Motion, and Visual Inertial Odometry in Indoor Scenes with Monocular or Stereo Algorithms

    A new event camera dataset, EVIMO2, is introduced that improves on the popular EVIMO dataset by providing more data, from better cameras, in more complex scenarios. As with its predecessor, EVIMO2 provides labels in the form of per-pixel ground truth depth and segmentation as well as camera and object poses. All sequences use data from physical cameras, and many sequences feature multiple independently moving objects. Such labeled data is typically unavailable in physical event camera datasets. Thus, EVIMO2 will serve as a challenging benchmark for existing algorithms and a rich training set for the development of new algorithms. In particular, EVIMO2 is suited to supporting research in motion and object segmentation, optical flow, structure from motion, and visual (inertial) odometry in both monocular and stereo configurations. EVIMO2 consists of 41 minutes of data from three 640×480 event cameras, one 2080×1552 classical color camera, inertial measurements from two six-axis inertial measurement units, and millimeter-accurate object poses from a Vicon motion capture system. The dataset's 173 sequences are arranged into three categories: 3.75 minutes of independently moving household objects, 22.55 minutes of static scenes, and 14.85 minutes of basic motions in shallow scenes. Some sequences were recorded in low-light conditions where conventional cameras fail. Depth and segmentation are provided at 60 Hz for the event cameras and 30 Hz for the classical camera. The masks can be regenerated using open-source code at rates as high as 200 Hz. This technical report briefly describes EVIMO2; the full documentation is available online, and videos of individual sequences can be sampled on the download page. Comment: 5 pages, 3 figures, 1 table.
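
    Because the ground truth arrives at fixed rates (60 Hz for the event cameras, 30 Hz for the classical camera) while events are asynchronous, a consumer of the dataset has to associate each event with a label frame. A minimal nearest-timestamp lookup might look like the sketch below; the timestamps and array layouts are illustrative assumptions, and the actual EVIMO2 file formats and tooling are covered by its online documentation.

```python
# Hedged sketch: matching asynchronous event timestamps to fixed-rate
# ground-truth frames by nearest timestamp. Layouts are illustrative only.
import numpy as np

def nearest_label_index(event_ts, label_ts):
    """For each event timestamp, index of the closest ground-truth frame."""
    idx = np.searchsorted(label_ts, event_ts)   # first frame at/after event
    idx = np.clip(idx, 1, len(label_ts) - 1)
    left_closer = (event_ts - label_ts[idx - 1]) < (label_ts[idx] - event_ts)
    return np.where(left_closer, idx - 1, idx)

# Example: 60 Hz labels over one second, a handful of event timestamps.
label_ts = np.arange(0.0, 1.0, 1.0 / 60.0)
event_ts = np.array([0.001, 0.0251, 0.499, 0.999])
print(nearest_label_index(event_ts, label_ts))
```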

    Use of Respiratory Protection Devices by Medical Students during the COVID-19 Pandemic

    The use of face masks has assumed a leading place among nonspecific prevention measures during the coronavirus pandemic. The effectiveness of this protective measure depends on the specifics of individual use. The purpose of our study was to analyze the use of respiratory protective equipment (RPE) by medical students during the COVID-19 pandemic. The evaluation of face mask use was based on the results of a survey of medical students at Sechenov University. There were 988 participants in the study: 97.5% used RPE during the pandemic, 89.1% used disposable medical and hygienic face masks, 27.4% used reusable cloth face masks, and 13.2% used respirators. The majority of respondents (75.2%) were found to wear face masks correctly. However, 17.0% of the respondents were found to cover only their mouths with a face mask, while 7.8% reported often shifting their face mask under the chin due to perceived discomfort. Only 25.1% of respondents changed their disposable face mask after 2–3 h of wearing, while 13.0% decontaminated and reused it several times. Most cloth face mask users (93.7%) decontaminated their masks, but only 55.7% of respondents did so daily. Face masks were most often worn in medical organizations (91.5%), and 1.4% of respondents did not use respiratory protection anywhere. In conclusion, we consider it necessary to introduce a special module on nonspecific prevention into the discipline of hygiene.

    The Role of Chloroviruses as Possible Infectious Agents for Human Health: Putative Mechanisms of ATCV-1 Infection and Potential Routes of Transmission

    The Chlorovirus genus of the Phycodnaviridae family includes large viruses with a double-stranded DNA genome. Chloroviruses are widely distributed in freshwater bodies around the world and have been isolated from freshwater sources in Europe, Asia, Australia, and North and South America. One representative of the chloroviruses is Acanthocystis turfacea chlorella virus 1 (ATCV-1), which is hosted by Chlorella heliozoae. A few publications over the last ten years about the potential effects of ATCV-1 on the human brain have sparked interest among specialists in the field of human infectious pathology. The goal of our viewpoint was to compile the scant research on the effects of ATCV-1 on the human body, to demonstrate the role of chloroviruses as possible new infectious agents for human health, and to indicate potential routes of virus transmission. We believe that ATCV-1 transmission routes remain unexplored. We also raise the question of whether chlorella-based nutritional supplements pose a risk of ATCV-1 infection. Further research will help to identify the routes of infection, the cell types in which ATCV-1 can persist, and the pathological mechanisms of the virus's effect on the human body.

    The Role of Different Types of microRNA in the Pathogenesis of Breast and Prostate Cancer

    Micro ribonucleic acids (microRNAs or miRNAs) form a distinct subtype of non-coding RNA and are widely recognized as one of the most significant gene expression regulators in mammalian cells. Mechanistically, the regulation occurs through microRNA binding with its response elements in the 3'-untranslated region of target messenger RNAs (mRNAs), resulting in the post-transcriptional silencing of the genes expressing those mRNAs. Compared to small interfering RNAs, microRNAs have more complex regulatory patterns, making them suitable for fine-tuning gene expression in different tissues. Dysregulation of microRNAs is well known as one of the causative factors in malignant cell growth. Today, there is a wealth of data on microRNAs in different cancer transcriptomes, the specificity of microRNA expression changes in various tissues, and the predictive value of specific microRNAs as cancer biomarkers. Breast cancer (BCa) is the most common cancer in women worldwide and seriously impairs patients' physical health; its incidence is predicted to rise further. Mounting evidence indicates that microRNAs play key roles in tumorigenesis and tumor development. Prostate cancer (PCa) is one of the most commonly diagnosed cancers in men, and different microRNAs likewise play important roles in PCa. Early diagnosis of BCa and PCa using microRNAs is very useful for improving individual outcomes in the framework of predictive, preventive, and personalized (3P) medicine, thereby reducing the economic burden. This article reviews the roles of different types of microRNA in BCa and PCa progression.